Model Selection for Mixture Models Using Perfect Sample
Authors
Abstract:
We consider a perfect-sampling method for model selection among finite mixture models with either a known (fixed) or an unknown number of components. The method can be applied in the most general setting, without assumptions on the relation between the rival models and the true distribution: both, one, or neither of the rival models may be well-specified or mis-specified, and they may be nested or non-nested. We treat the mixture distribution as a complete-data (bivariate) distribution by predicting the missing (unobserved) component-indicator variable, and we show that this idea makes Vuong's test applicable to selecting the optimal mixture model whether the number of components is known (fixed) or unknown. We also consider AIC and BIC based on the complete-data distribution. The performance of the method is evaluated through a Monte Carlo study and a real data set, Total Energy Production.
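As a rough illustration of the complete-data idea, the sketch below is a hedged example, not the paper's exact procedure: the simulated data, the two rival Gaussian mixtures, and the MAP prediction of the missing component indicator are all assumptions made for illustration. It fits two rival mixture models by EM, predicts the unobserved labels, and computes complete-data AIC/BIC together with a Vuong-type statistic.

```python
# Illustrative sketch only: rival Gaussian mixtures fitted by EM, the missing
# component indicator predicted by its MAP value, and a Vuong-type statistic
# plus complete-data AIC/BIC computed from the complete-data log-likelihoods.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Illustrative data: a two-component Gaussian mixture.
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])
X = x.reshape(-1, 1)

def complete_data_loglik(gm, X):
    """Pointwise complete-data log-density log f(x_i, z_i), with z_i the
    MAP-predicted (unobserved) component label."""
    z = gm.predict(X)                                  # predicted missing labels
    mu = gm.means_[z, 0]
    sd = np.sqrt(gm.covariances_[z, 0, 0])
    return np.log(gm.weights_[z]) + norm.logpdf(X[:, 0], mu, sd)

def fit(k):
    return GaussianMixture(n_components=k, random_state=0).fit(X)

gm_a, gm_b = fit(2), fit(3)                            # rival mixture models
ll_a, ll_b = complete_data_loglik(gm_a, X), complete_data_loglik(gm_b, X)

# Complete-data AIC / BIC (p = free parameters: k-1 weights + 2k for means, variances).
n = len(X)
for name, gm, ll in [("k=2", gm_a, ll_a), ("k=3", gm_b, ll_b)]:
    k = gm.n_components
    p = (k - 1) + 2 * k
    print(name, "AIC:", -2 * ll.sum() + 2 * p, "BIC:", -2 * ll.sum() + p * np.log(n))

# Vuong-type statistic on the pointwise log-likelihood ratios.
d = ll_a - ll_b
z_stat = d.sum() / (np.sqrt(n) * d.std(ddof=0))
print("Vuong-type statistic:", z_stat)                 # large |z| favours one model
```

Here the missing indicator is simply predicted by its MAP value; the paper's perfect-sampling treatment of that prediction step is not reproduced in this sketch.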
Similar articles
Generating Gaussian Mixture Models by Model Selection For Speech Recognition
While all modern speech recognition systems use Gaussian mixture models, there is no standard method to determine the number of mixture components. Current choices for mixture component numbers are usually arbitrary with little justification. In this paper we apply some common model selection methods to determine the number of mixture components. We show that they are ill-suited for the speech ...
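For context on the kind of "common model selection method" the snippet refers to, here is a minimal sketch of choosing the number of Gaussian mixture components by BIC with scikit-learn; the data and the candidate range of component counts are assumptions for illustration, not from the cited paper.

```python
# Hedged sketch: pick the number of GMM components by minimum BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Illustrative 2-D data drawn from two well-separated clusters.
X = np.concatenate([rng.normal(-3, 1, (200, 2)), rng.normal(3, 1, (200, 2))])

bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 9)}
best_k = min(bic, key=bic.get)
print("BIC per k:", bic)
print("selected number of components:", best_k)
```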
Models for Sample Selection Bias
FBST for Mixture Model Selection
The Fully Bayesian Significance Test (FBST) is a coherent Bayesian significance test for sharp hypotheses. The computation of the evidence measure used on the FBST is performed in two steps: 1) The optimization step consists of finding f∗, the maximum (supremum) of the posterior under the null hypothesis. 2) The integration step consists of integrating the posterior density over the Tangential ...
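A minimal sketch of the two FBST steps on a toy problem, assuming a Beta posterior for a binomial proportion and the sharp null H0: theta = 0.5; the counts and the flat prior are illustrative assumptions, not taken from the cited paper.

```python
# Hedged sketch of the FBST: optimization step, then Monte Carlo integration
# of the posterior over the tangential set.
import numpy as np
from scipy.stats import beta

successes, failures = 23, 12          # assumed data
a, b = 1 + successes, 1 + failures    # Beta(1, 1) prior -> Beta posterior
post = beta(a, b)

# Step 1 (optimization): supremum of the posterior density under H0.
f_star = post.pdf(0.5)                # H0 is the single point theta = 0.5

# Step 2 (integration): posterior mass of the tangential set
# T = {theta : p(theta | x) > f_star}, estimated by Monte Carlo.
draws = post.rvs(size=100_000, random_state=2)
mass_T = np.mean(post.pdf(draws) > f_star)

ev_H0 = 1.0 - mass_T                  # FBST evidence value in favour of H0
print("evidence value ev(H0):", ev_H0)
```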
Perfect posterior simulation for mixture and hidden Markov models
In this paper we present an application of the read-once coupling from the past algorithm to problems in Bayesian inference for latent statistical models. We describe a method for perfect simulation from the posterior distribution of the unknown mixture weights in a mixture model. Our method is extended to a more general mixture problem, where unknown parameters exist for the mixture components...
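The read-once coupling-from-the-past construction itself is beyond a short sketch. Below is the ordinary data-augmentation Gibbs sweep for the mixture-weight posterior that such perfect samplers build on, shown as a plain MCMC stand-in; the data, the two fixed component densities, and the Dirichlet prior are assumptions for illustration.

```python
# Not the read-once CFTP algorithm: a standard data-augmentation Gibbs sweep
# for the posterior of the mixture weights with known component densities.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 1, 50)])
comp_logpdf = np.column_stack([norm.logpdf(x, 0, 1), norm.logpdf(x, 4, 1)])
prior = np.ones(2)                              # Dirichlet(1, 1) prior on the weights

def gibbs_sweep(weights):
    # 1) Sample the latent labels z_i given the current weights.
    logp = np.log(weights) + comp_logpdf
    p = np.exp(logp - logp.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    z = (rng.random(len(x)) < p[:, 1]).astype(int)   # valid for two components
    # 2) Sample the weights from their Dirichlet full conditional.
    counts = np.bincount(z, minlength=2)
    return rng.dirichlet(prior + counts)

w = np.array([0.5, 0.5])
for _ in range(2_000):                          # ordinary MCMC run, not perfect sampling
    w = gibbs_sweep(w)
print("one posterior draw of the mixture weights:", w)
```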
An improved cluster model selection method for agglomerative hierarchical speaker clustering using incremental Gaussian mixture models
In this paper, we improve our previous cluster model selection method for agglomerative hierarchical speaker clustering (AHSC) based on incremental Gaussian mixture models (iGMMs). In the previous work, we measured the likelihood of all the data points in a given cluster for each mixture component of the GMM modeling the cluster. Then, we selected the N -best component Gaussians with the highes...
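A minimal sketch of the per-component scoring step described in the snippet: every point of a cluster is scored under each Gaussian of the cluster's GMM and the N best-scoring components are kept. The feature dimension, GMM size, and N below are assumptions for illustration.

```python
# Hedged sketch: per-component log-likelihood scoring and N-best selection.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
cluster = rng.normal(0, 1, (500, 13))            # e.g. 13-dim acoustic features
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(cluster)

# Total log-likelihood of the cluster's points under each mixture component.
comp_ll = np.array([
    multivariate_normal(gmm.means_[c], np.diag(gmm.covariances_[c]))
    .logpdf(cluster).sum()
    for c in range(gmm.n_components)
])

N = 3
n_best = np.argsort(comp_ll)[::-1][:N]           # indices of the N best Gaussians
print("N-best components:", n_best, "scores:", comp_ll[n_best])
```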
Journal information
Volume 15, Issue 2
Pages 173-212
Publication date: 2019-03
Keywords: none provided for this article.